Programming
This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
My script runs constantly in the background watching for some
tcpdump filter matches; when it finds them, it writes them to a file: /tmp/tcpdump.log.
Once this file (/tmp/tcpdump.log) reaches 1024 bytes, the script should move it
to a new file (/tmp/tcpdump.log-date), create a fresh tcpdump.log, and
start writing to that.
I cannot get the last bit working: rolling over the existing file and
starting to write to a new one. Here's my script. Any help would be much
appreciated. Thanks.
Code:
#!/bin/bash
tcpdump not ssh > /tmp/tcpdump.log 2>&1
# print the file size
LS=$(ls -al /tmp/tcpdump.log | awk '{print $5}')
if [ "$LS" -gt 1024 ]; then
    /bin/mv /tmp/tcpdump.log /tmp/tcpdump.log.$(date +%d-%b-%Y)
    touch /tmp/tcpdump.log
fi
See the -C option in tcpdump. It will automatically roll over files for you.
1024 bytes is *awfully* small for a dump file - that's less than 1 full packet!
Thanks for your help. My version of tcpdump (OpenBSD) doesn't have the -C flag. Also, I am more interested in doing this in the shell script so I can use the script later with some other program.
I know 1024 bytes is too small but I can always increase that value.
Thanks for your help again. Any further help on the code itself would be much appreciated.
In this scenario you will most likely have to use logrotate with the copytruncate directive. But since you're on OpenBSD, you should use PF for what you are trying to do -- it's much more flexible and robust, and it will probably save you time in the end.
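To illustrate the PF suggestion, a sketch of what such a rule might look like in pf.conf (the rule below is hypothetical; PF copies logged packets to the pflog0 interface, which tcpdump can read directly):

```
# pf.conf sketch (hypothetical rule): log all TCP traffic except ssh.
# Matching packets appear on pflog0 and are written to /var/log/pflog
# by pflogd, which newsyslog can rotate for you.
pass log inet proto tcp from any to any port != ssh
```

You could then inspect the captured traffic with tcpdump -i pflog0 (live) or tcpdump -r /var/log/pflog, and let the system's standard log rotation handle the file management.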
How are you stopping tcpdump from outputting more than 1024 bytes? That isn't shown in your script. How do you know when it has output 1024 bytes? And where is the loop to look again if it is < 1024 bytes?
Renaming a log file in use isn't a problem as long as the old and new names are on the same file system, but at some point you need to signal tcpdump to stop dumping into the old file and start dumping into the new one. That isn't shown in your script either.
See man stat for obtaining size information on a file, but again, this is silly, because tcpdump can also cap its output at a maximum number of bytes. I'm sure OpenBSD has a port of a more feature-rich tcpdump. I installed mine from pkgsrc on NetBSD.
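As a sketch of the stat approach in place of parsing ls output (GNU coreutils syntax shown; on OpenBSD the equivalent invocation is stat -f %z, and the file name and 1024-byte threshold are just the ones from the original script):

```shell
#!/bin/sh
# Create a 1024-byte test file, then check its size with stat(1)
# instead of parsing the fifth column of `ls -al`.
printf '%1024s' '' > /tmp/size-demo.log

SIZE=$(stat -c %s /tmp/size-demo.log)   # GNU stat; on OpenBSD: stat -f %z
if [ "$SIZE" -ge 1024 ]; then
    echo "rotate"
fi
```

Parsing ls output is fragile (column positions differ between systems); stat asks the filesystem directly for the size.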
As I suspected, the copytruncate directive is needed in this case. If you remove it from the logrotate config, it will not work. This is on a Linux box, but it should be the same for you on OpenBSD.
My mistake - I thought you were trying to rotate the binary data files (eg. -w file) rather than just the STDOUT.
Two processes modifying an open file at the same time have unpredictable results. The copytruncate option is iffy at best.
Good luck.
copytruncate is the only option in this instance because the file is open. In general it works pretty well, but that's why I suggested he use PF if he is unable to install a more feature-rich tcpdump.
Quote (jcookeman): As I suspected, the copytruncate directive is needed in this case. If you remove it from the logrotate config, it will not work. This is on a Linux box, but it should be the same for you on OpenBSD.
Thanks for your script. This has been really helpful. I modified the script a bit, ran it, and I can see some weird things happening:
- as soon as tcpdump.log reaches 1024 bytes, the script copies the content of tcpdump.log to 5 other files: tcpdump.log.1, tcpdump.log.2, etc. I know this is because of the rotate 5 option in logrotate. But all I want is to copytruncate only once, to tcpdump.log.1, and when tcpdump.log reaches 1024 bytes again, copytruncate again (once) to tcpdump.log.2.
- after the copytruncate is done, the tcpdump.log file goes to 0 bytes, then jumps to 1024 bytes and keeps increasing - this goes on and on; that is, ls keeps reporting the file size as 0 bytes and then suddenly >= 1024 bytes.
Here is my new script. Would really appreciate further help. Thanks.
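For reference, a minimal logrotate configuration along the lines being discussed might look like this (the path and size threshold are taken from the thread; size-based triggering rotates at most one step per run, rather than shifting all five files at once on a forced rotation):

```
# logrotate sketch: rotate /tmp/tcpdump.log once it exceeds 1k,
# keeping up to 5 old copies, truncating in place since the
# writer holds the file open.
/tmp/tcpdump.log {
    size 1k
    rotate 5
    copytruncate
    missingok
    notifempty
}
```

Note that rotate 5 only caps how many old files are kept; seeing all five appear at once usually means logrotate was invoked with -f (force) or the state file made it think several rotations were due.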
Last edited by noir911; 07-24-2008 at 06:14 PM.
Reason: issue resolved now with two scripts
When a file is opened by a process, it has a file pointer of where to perform its *next* write. In your script, the shell has $FILE open for the redirection. After the shell has written byte 1024, its file pointer is set to write starting at byte 1025. Since you truncated the file, the next write by the shell will create a sparse file, where bytes 0-1024 are empty, and tcpdump's redirected output will continue at bytes 1025+. (this is a generalization of the concept; I've omitted some details for the sake of simplicity). Let's confirm with some file sizes:
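The file-pointer behaviour described above can be reproduced without tcpdump at all. This minimal sketch (file name is illustrative) truncates a file while a write descriptor still holds an offset past the new end, producing a sparse file:

```shell
#!/bin/sh
# Open fd 3 for writing; its file offset starts at 0.
exec 3> /tmp/sparse-demo.log
printf '%1024s' '' >&3            # write 1024 bytes; fd 3's offset is now 1024
: > /tmp/sparse-demo.log          # truncate the file (fd 3 keeps offset 1024)
printf 'X' >&3                    # this write lands at offset 1024: bytes 0-1023 are a hole
exec 3>&-                         # close fd 3
wc -c < /tmp/sparse-demo.log      # apparent size: 1025 bytes
```

Even though the file was truncated to 0 bytes in the middle, the final apparent size is 1025 bytes: the kernel filled the gap before the writer's offset with a hole, exactly as happens to tcpdump's redirected output after copytruncate.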
The astute observer will also notice that the tcpdump.log.# file sizes do not exactly equal 1024, but jump in size according to the I/O buffer size and when it is flushed. On my system, the file size delta between consecutive tcpdump.log.# files is 16K (some runs have 32K or even 64K deltas, depending upon timing, and whether or not -l is used with tcpdump).
As I have been trying to tell you, you cannot force another process to change or reset its internal file pointer. I'll state it again - two random processes CANNOT write to the same file at the same time, as the results are undefined. I hope you now understand this basic lesson.
So you need a different approach.
One approach is to write your own read-n-rotate program, that reads and writes its STDIN, and rotates its output file every N bytes. This can easily be done in C, perl, or your language of choice. This way - you control the open and close of the output files.
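A minimal shell sketch of that read-n-rotate filter (file names and the 1024-byte threshold are illustrative; a C or perl version would handle binary data and partial lines more robustly):

```shell
#!/bin/sh
# read-n-rotate: copy stdin to $1, rotating to $1.1, $1.2, ...
# each time roughly $2 bytes have been written. Because this
# process owns the open/close of the output file, the rename
# is safe -- no two writers ever share a file pointer.
rotate_filter() {
    FILE=$1; MAX=$2; n=1; bytes=0
    : > "$FILE"
    while IFS= read -r line; do
        printf '%s\n' "$line" >> "$FILE"
        bytes=$((bytes + ${#line} + 1))      # +1 for the newline
        if [ "$bytes" -ge "$MAX" ]; then
            mv "$FILE" "$FILE.$n"            # we control open/close: safe
            n=$((n + 1)); bytes=0
            : > "$FILE"
        fi
    done
}

# Demo: 300 lines of 12 bytes each through a 1024-byte threshold.
seq -f 'line-%06.0f' 1 300 | rotate_filter /tmp/rot-demo.log 1024
```

In real use you would pipe tcpdump into it, e.g. `tcpdump not ssh 2>&1 | ./rotate.sh /tmp/tcpdump.log 1024`. Since the filter appends line by line and closes the file before renaming, there is no shared file pointer and no sparse-file surprise.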
Another approach is to use tcpdump's -w, -C, and -W options to create N files, each of a specified maximum size. tcpdump will automatically close one file and open the next file for output. Using -W allows you to create a ring of files (e.g. 1, 2, 3, 1, 2, 3, ...). When tcpdump has finished writing a file, it will advance to the next. You are now safe to read the file with tcpdump -r, and you can delete the file too. How do you know when to read the next file? Answer: after tcpdump has created file N+1 mod the maximum number of files you specified with -W. This is how you manage a ring buffer with one writer and one reader (called the producer/consumer problem in computer science).
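A sketch of that ring-buffer approach (the interface name and sizes are illustrative, and the capture command needs root; note -C is in units of millions of bytes on most tcpdump versions, and exact output file naming varies by version):

```shell
# Ring of 3 binary capture files, ~1 MB each (illustrative; run as root):
#   tcpdump -i em0 -C 1 -W 3 -w /tmp/dump not ssh
# tcpdump reuses the files in order, along the lines of
# /tmp/dump0, /tmp/dump1, /tmp/dump2, /tmp/dump0, ...

# Consumer rule from the text: file k is safe to read (with tcpdump -r)
# once tcpdump has moved on and created file (k + 1) mod W.
W=3                               # ring size, matching -W 3
k=2                               # file we want to consume
safe_after=$(( (k + 1) % W ))     # file 2 is complete once file 0 reappears
echo "$safe_after"
```

Watching for the successor file is what makes the reader side race-free: tcpdump has closed file k by the time file (k+1) mod W exists.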